
    Beyond KernelBoost

    In this Technical Report we propose a set of improvements to the KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with a scheme inspired by Auto-Context, but suited to situations where the lack of large training sets poses a risk of overfitting. The aim is to capture the interactions between neighboring image pixels to better regularize the boundaries of segmented regions. As in Auto-Context [Tu et al., PAMI 2009], the segmentation process is iterative and, at each iteration, the segmentation results from the previous iterations are taken into account in conjunction with the image itself. However, unlike [Tu et al., PAMI 2009], we organize our recursion so that the classifiers can progressively focus on difficult-to-classify locations. This lets us exploit the power of the decision-tree paradigm while avoiding overfitting. In this architecture, KernelBoost is a powerful building block thanks to its ability to learn from the score maps produced by previous iterations. We first introduce two important mechanisms to strengthen the KernelBoost classifier, namely pooling and the clustering of positive samples based on the appearance of the corresponding ground truth. These operations significantly increase the effectiveness of the system on biomedical images, where texture plays a major role in the recognition of the different image components. We then present further techniques that can be easily integrated into the KernelBoost framework to improve the accuracy of the final segmentation. We show extensive results on different medical image datasets, including some multi-label tasks, on which our method outperforms state-of-the-art approaches. The resulting segmentations display high accuracy, neat contours, and reduced noise.
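    To make the iterative scheme concrete, below is a minimal sketch of an Auto-Context-style loop in which each iteration's classifier sees the previous iteration's score map alongside the image. The pixel features, the use of a scikit-learn gradient-boosting classifier in place of KernelBoost, and the function names are illustrative assumptions, not the report's actual implementation.

```python
# Minimal sketch of an Auto-Context-style segmentation loop.
# A scikit-learn GradientBoostingClassifier stands in for KernelBoost (assumption);
# per-pixel features are just the raw intensity plus the previous score map.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def pixel_features(image, score_map):
    """Stack raw intensity and the previous iteration's score per pixel."""
    return np.stack([image.ravel(), score_map.ravel()], axis=1)

def autocontext_train(image, labels, n_iters=3):
    """Train a cascade of classifiers, each seeing the previous score map."""
    score_map = np.full(image.shape, 0.5)   # uninformative prior for iteration 1
    cascade = []
    for _ in range(n_iters):
        X = pixel_features(image, score_map)
        clf = GradientBoostingClassifier(n_estimators=50)
        clf.fit(X, labels.ravel())
        cascade.append(clf)
        # The score map of this iteration feeds the next one; a refinement
        # could also reweight hard-to-classify pixels here.
        score_map = clf.predict_proba(X)[:, 1].reshape(image.shape)
    return cascade, score_map

# Tiny synthetic demo: a bright blob on a noisy dark background.
image = np.zeros((32, 32))
image[10:20, 10:20] = 1.0
image += 0.1 * np.random.randn(32, 32)
labels = (image > 0.5).astype(int)
cascade, scores = autocontext_train(image, labels)
```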

    Prevalence and fatality rates of COVID-19: What are the reasons for the wide variations worldwide?

    This article is made available for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.

    Is Sparsity Really Relevant for Image Classification?

    Recent years have seen an increasing interest in sparseness constraints for image classification and object recognition, probably motivated by the evidence of sparse internal representations in the primate visual cortex. It is still unclear, however, whether or not sparsity helps classification. In this paper we analyze the image classification task on CIFAR-10, a very challenging dataset, and evaluate the impact of sparseness on the recognition rate using both standard and learned filter banks in a modular architecture. In our experiments, enforcing sparsity constraints at run-time proved unnecessary, since indiscriminately sparsifying the descriptors brought no performance improvement. This observation agrees with recent findings on the human visual cortex suggesting that a feed-forward mechanism underlies object recognition, and it is of practical interest, as enforcing these constraints can carry a heavy computational cost. Our best method outperforms the state of the art, improving the success rate from 64.84% on color images to 71.53% on grayscale images.
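    For illustration, here is a small sketch of what "indiscriminately sparsifying the descriptors" at run-time could look like; the paper's finding is that this step brings no accuracy gain. The soft-thresholding rule, the descriptor size, and the threshold value are assumptions made for this example, not details taken from the paper.

```python
# Sketch of run-time descriptor sparsification (the step found unnecessary):
# filter responses are either used densely or soft-thresholded so that only
# the strongest activations survive.
import numpy as np

def soft_threshold(descriptor, lam):
    """Shrink responses toward zero; values below lam become exactly zero."""
    return np.sign(descriptor) * np.maximum(np.abs(descriptor) - lam, 0.0)

# Hypothetical filter-bank responses for one image patch.
responses = np.random.randn(256)

dense_descriptor  = responses                       # used as-is
sparse_descriptor = soft_threshold(responses, 0.8)  # sparsified version

print("non-zeros dense :", np.count_nonzero(dense_descriptor))
print("non-zeros sparse:", np.count_nonzero(sparse_descriptor))
```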

    Learning Separable Filters

    Learning filters to produce sparse image representations in terms of overcomplete dictionaries has emerged as a powerful way to create image features for many different purposes. Unfortunately, these filters are usually both numerous and non-separable, making their use computationally expensive. In this paper, we show that such filters can be computed as linear combinations of a smaller number of separable ones, thus greatly reducing the computational complexity at no cost in terms of performance. This makes filter learning approaches practical even for large images or 3D volumes, and we show that we significantly outperform state-of-the-art methods on the linear structure extraction task, in terms of both accuracy and speed. Moreover, our approach is general and can be used on generic filter banks to reduce the complexity of the convolutions.

    Learning Separable Filters

    Learning filters to produce sparse image representations in terms of overcomplete dictionaries has emerged as a powerful way to create image features for many different purposes. Unfortunately, these filters are usually both numerous and non-separable, making their use computationally expensive. In this paper, we show that such filters can be computed as linear combinations of a smaller number of separable ones, thus greatly reducing the computational complexity at no cost in terms of performance. This makes filter learning approaches practical even for large images or 3D volumes, and we show that we significantly outperform state-of-the-art methods on the tubular structure extraction task, in terms of both accuracy and speed. Moreover, our approach is general and can be used on generic convolutional filter banks to reduce the complexity of the feature extraction step.
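    The two entries above rest on the same observation: convolution with a separable (rank-1) filter reduces to two 1D convolutions, so approximating a filter bank with a few separable components cuts the cost of feature extraction. The sketch below illustrates the idea with a plain SVD-based low-rank approximation of a single oriented filter; the actual method learns a shared separable basis for the whole bank, so the kernel, the SVD shortcut, and the function names here are assumptions for illustration only.

```python
# Sketch: approximate a 2D filter by a few separable (rank-1) terms via SVD
# and apply each term as a row convolution followed by a column convolution.
# Illustrative only; the paper learns a shared separable basis instead.
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import convolve2d

def separable_terms(kernel, rank):
    """Decompose a 2D kernel into `rank` (column, row) rank-1 components."""
    U, s, Vt = np.linalg.svd(kernel)
    return [(U[:, i] * s[i], Vt[i, :]) for i in range(rank)]

def conv_separable(image, terms):
    """Sum of separable convolutions: 1D along rows, then 1D along columns."""
    out = np.zeros_like(image, dtype=float)
    for col, row in terms:
        tmp = convolve1d(image, row, axis=1, mode="constant")
        out += convolve1d(tmp, col, axis=0, mode="constant")
    return out

# A structured, oriented filter is nearly separable (low rank).
y, x = np.mgrid[-7:8, -7:8]
kernel = (x + 0.5 * y) * np.exp(-(x ** 2 + y ** 2) / 18.0)

image = np.random.rand(128, 128)
full = convolve2d(image, kernel, mode="same")   # direct 2D convolution
fast = conv_separable(image, separable_terms(kernel, rank=2))
print("relative error:", np.linalg.norm(full - fast) / np.linalg.norm(full))
```

    For a k x k filter, the direct convolution costs on the order of k^2 operations per pixel, while each separable term costs about 2k; with a small shared set of separable components the saving grows with the size of the filter bank.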
